13 research outputs found

    Superdevelopments for Weak Reduction

    We study superdevelopments in the weak lambda calculus of Çağman and Hindley, a confluent variant of the standard weak lambda calculus in which reduction below lambdas is forbidden. In contrast to developments, a superdevelopment from a term M allows not only residuals of redexes in M to be reduced but also some newly created ones. In the lambda calculus there are three ways new redexes may be created; in the weak lambda calculus a new form of redex creation is possible. We present labeled and simultaneous reduction formulations of superdevelopments for the weak lambda calculus and prove them equivalent.

    A Strong Distillery

    Abstract machines for the strong evaluation of lambda-terms (that is, under abstractions) are a mostly neglected topic, despite their use in the implementation of proof assistants and higher-order logic programming languages. This paper introduces a machine for the simplest form of strong evaluation, leftmost-outermost (call-by-name) evaluation to normal form, proving it correct, complete, and bounding its overhead. Such a machine, deemed Strong Milner Abstract Machine, is a variant of the KAM computing normal forms and using just one global environment. Its properties are studied via a special form of decoding, called a distillation, into the Linear Substitution Calculus, neatly reformulating the machine as a standard micro-step strategy for explicit substitutions, namely linear leftmost-outermost reduction, i.e., the extension to normal form of linear head reduction. Additionally, the overhead of the machine is shown to be linear both in the number of steps and in the size of the initial term, validating its design. The study highlights two distinguished features of strong machines, namely backtracking phases and their interactions with abstractions and environments. Comment: Accepted at APLAS 2015.
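    The Strong MAM builds on the classical Krivine Abstract Machine. As a point of reference, here is a minimal sketch of the textbook KAM (call-by-name evaluation to weak head normal form, the fragment the strong machine extends); the constructor and function names are our own, not the paper's:

    ```python
    # Minimal Krivine Abstract Machine (KAM): call-by-name weak head
    # evaluation using closures and an argument stack. This is the
    # textbook KAM, not the paper's Strong MAM (which also evaluates
    # under abstractions); representation choices are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Var:
        name: str

    @dataclass
    class Lam:
        param: str
        body: object

    @dataclass
    class App:
        fun: object
        arg: object

    def kam(term):
        """Run the KAM on a closed term; return the final closure (term, env)."""
        env = {}      # variable name -> closure (term, env)
        stack = []    # closures for not-yet-needed arguments
        while True:
            if isinstance(term, App):              # push the argument, unevaluated
                stack.append((term.arg, env))
                term = term.fun
            elif isinstance(term, Lam) and stack:  # beta: bind the top of the stack
                env = {**env, term.param: stack.pop()}
                term = term.body
            elif isinstance(term, Var):            # jump to the stored closure
                term, env = env[term.name]
            else:
                return term, env                   # weak head normal form reached

    # (\x. x x) (\y. y) evaluates to (a closure for) \y. y
    whnf, _ = kam(App(Lam("x", App(Var("x"), Var("x"))), Lam("y", Var("y"))))
    print(type(whnf).__name__, whnf.param)  # -> Lam y
    ```

    The single mutable `env`/`stack` pair mirrors the "one global environment" flavor discussed in the abstract only loosely; the real machine's environment handling is part of what the distillation makes precise.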

    Distilling Abstract Machines (Long Version)

    It is well-known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between big-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, catching many machines in the literature. We start by distilling the KAM, the CEK, and the ZINC, and then provide simplified versions of the SECD, the lazy KAM, and Sestoft's machine. Along the way we also introduce some new machines with global environments. Moreover, we show that distillation preserves the time complexity of the executions, i.e. the LSC is a complexity-preserving abstraction of abstract machines. Comment: 63 pages.

    Reductions in Higher-Order Rewriting and Their Equivalence

    Proof terms are syntactic expressions that represent computations in term rewriting. They were introduced by Meseguer and exploited by van Oostrom and de Vrijer to study equivalence of reductions in (left-linear) first-order term rewriting systems. We study the problem of extending the notion of proof term to higher-order rewriting, which generalizes the first-order setting by allowing terms with binders and higher-order substitution. In previous works that devise proof terms for higher-order rewriting, such as Bruggink's, it has been noted that the challenge lies in reconciling composition of proof terms and higher-order substitution (β-equivalence). This led Bruggink to reject "nested" composition, other than at the outermost level. In this paper, we propose a notion of higher-order proof term we dub rewrites that supports nested composition. We then define two notions of equivalence on rewrites, namely permutation equivalence and projection equivalence, and show that they coincide.

    Two Decreasing Measures for Simply Typed λ-Terms

    This paper defines two decreasing measures for terms of the simply typed λ-calculus, called the W-measure and the T^{m}-measure. A decreasing measure is a function that maps each typable λ-term to an element of a well-founded ordering, in such a way that contracting any β-redex decreases the value of the function, entailing strong normalization. Both measures are defined constructively, relying on an auxiliary calculus, a non-erasing variant of the λ-calculus. In this system, dubbed the λ^{m}-calculus, each β-step creates a "wrapper" containing a copy of the argument that cannot be erased and cannot interact with the context in any other way. Both measures rely crucially on the observation, known to Turing and Prawitz, that contracting a redex cannot create redexes of higher degree, where the degree of a redex is defined as the height of the type of its λ-abstraction. The W-measure maps each λ-term to a natural number, and it is obtained by evaluating the term in the λ^{m}-calculus and counting the number of remaining wrappers. The T^{m}-measure maps each λ-term to a structure of nested multisets, where the nesting depth is proportional to the maximum redex degree.
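    The degree of a redex, i.e. the height of the type of its λ-abstraction, is easy to compute. A small sketch, with simple types encoded here as the string "o" for a base type and a pair (A, B) for an arrow A → B (this encoding is ours, not the paper's):

    ```python
    # Height of a simple type, used as the "degree" of a beta-redex:
    # the degree of (\x:A. M) N is the height of the arrow type A -> B.
    # Encoding (ours, for illustration): "o" is a base type, (A, B) is A -> B.
    def height(ty):
        """Height of a simple type; a base type has height 1."""
        if ty == "o":
            return 1
        src, tgt = ty
        return 1 + max(height(src), height(tgt))

    # (\x:o. x) has type o -> o, so a redex headed by it has degree 2;
    # (\f:o->o. f) has type (o -> o) -> (o -> o), so degree 3.
    print(height(("o", "o")), height((("o", "o"), ("o", "o"))))  # -> 2 3
    ```

    The Turing/Prawitz observation then says that firing a redex of degree d can only create redexes of degree strictly less than d, which is what makes degree-indexed multisets a decreasing measure.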

    Proofs and Refutations for Intuitionistic and Second-Order Logic

    The λ^{PRK}-calculus is a typed λ-calculus that exploits the duality between the notions of proof and refutation to provide a computational interpretation for classical propositional logic. In this work, we extend λ^{PRK} to encompass classical second-order logic, by incorporating parametric polymorphism and existential types. The system is shown to enjoy good computational properties, such as type preservation, confluence, and strong normalization, which is established by means of a reducibility argument. We identify a syntactic restriction on proofs that characterizes exactly the intuitionistic fragment of second-order λ^{PRK}, and we study canonicity results.

    Efficient repeat finding in sets of strings via suffix arrays

    We consider two repeat finding problems relative to sets of strings: (a) Find the largest substrings that occur in every string of a given set; (b) Find the maximal repeats in a given string that occur in no string of a given set. Our solutions are based on the suffix array construction, requiring O(m) memory, where m is the length of the longest input string, and O(n log m) time, where n is the whole input size (the sum of the lengths of the strings in the input). The most expensive part of our algorithms is the computation of several suffix arrays. We give an implementation and experimental results that evidence the efficiency of our algorithms in practice, even for very large inputs.
    Authors: Pablo Barenbaum, Veronica Andrea Becher, Alejandro Deymonnaz, Melisa Halsband, and Pablo Ariel Heiber (Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación; Becher and Heiber also CONICET, Argentina).
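    To fix the specification of problem (a), here is a brute-force baseline (ours, for illustration only; it is quadratic and nothing like the paper's O(n log m) suffix-array algorithm):

    ```python
    # Brute-force baseline for problem (a): the longest substrings that
    # occur in every string of a set. Candidates are drawn from a
    # shortest input string, tried from longest to shortest.
    def longest_common_substrings(strings):
        """Return the set of longest substrings present in every input string."""
        ref = min(strings, key=len)
        for k in range(len(ref), 0, -1):
            hits = {ref[i:i + k]
                    for i in range(len(ref) - k + 1)
                    if all(ref[i:i + k] in s for s in strings)}
            if hits:
                return hits
        return set()

    print(longest_common_substrings(["banana", "ananas", "bandana"]))  # -> {'ana'}
    ```

    A generalized suffix array over the concatenation of the inputs (with distinct separators) replaces the inner membership tests and yields the stated bounds.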

    Dynamic Semantics of Calculi with Explicit Substitutions at a Distance (Sémantique dynamique des calculs avec substitutions explicites à distance)

    Explicit substitution calculi are variants of the lambda-calculus in which the operation of substitution is not defined at the metalanguage level, but rather implemented by means of rewriting rules. Our main object of study is a particular explicit substitution calculus, the Linear Substitution Calculus (LSC), introduced by Accattoli and Kesner in 2010. Its distinguishing feature is that rewriting rules operate non-locally (at a distance). In this thesis, first, we define abstract machines to implement evaluation strategies in the LSC: call-by-name for weak and strong evaluation, call-by-value, and call-by-need. We prove that these machines are correct and that they preserve computational time complexity. Second, we define an extension of the call-by-need evaluation strategy in the LSC for strong reduction. We show that the strong call-by-need strategy is complete with respect to call-by-name, using a non-idempotent intersection type system, and we show how to extend the strategy to deal with pattern matching and recursion. Finally, we study the theory of residuals and redex families in the LSC. To this aim, we define a variant of the LSC endowed with Lévy labels, which allows us to prove that it enjoys the Finite Family Developments property. We apply this property to obtain results on optimality, standardization, and normalization for the LSC, and we generalize some of these results to the axiomatic framework of Deterministic Family Structures.
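    The essence of call-by-need, which the thesis's machines implement for the LSC, is that an argument is evaluated at most once and its result is shared. A miniature sketch of that sharing discipline (a memoizing thunk; the class is an illustration of the idea, not the LSC or the machines themselves):

    ```python
    # Call-by-need in miniature: a thunk runs its suspended computation
    # at most once and memoizes the result, so repeated demands share
    # one evaluation. Illustrative only; not the thesis's machines.
    class Thunk:
        def __init__(self, compute):
            self._compute = compute
            self._forced = False
            self._value = None

        def force(self):
            if not self._forced:          # evaluate on first demand only
                self._value = self._compute()
                self._forced = True
            return self._value

    calls = 0
    def expensive():
        global calls
        calls += 1                        # count how often we actually evaluate
        return 42

    t = Thunk(expensive)
    print(t.force(), t.force(), calls)    # -> 42 42 1
    ```

    Call-by-name would re-run `expensive` on every demand; call-by-value would run it even if the result were never needed. Call-by-need sits in between, which is why completeness with respect to call-by-name is the natural correctness statement.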